Title: Agent approaches to Security, Trust and Privacy in Pervasive Computing
1. Agent approaches to Security, Trust and Privacy in Pervasive Computing
- Anupam Joshi
- joshi_at_cs.umbc.edu
- http://www.cs.umbc.edu/joshi/
2. The Vision
- Pervasive computing is a natural extension of the present human computing lifestyle
- Using computing technologies will be as natural as using other non-computing technologies (e.g., pen, paper, and cups)
- Computing services will be available anytime and anywhere.
3. Pervasive Computing
- "The most profound technologies are those that disappear. They weave themselves into the fabric of everyday life until they are indistinguishable from it." -- Mark Weiser
- Think writing, central heating, electric lighting, ...
- Not taking your laptop to the beach, or immersing yourself in a virtual reality
4. Today: Life is Good.
5. Tomorrow: We've Got Problems!
6. Yesterday: Gadgets Ruled
7. Today: Communication Rules
8. Tomorrow: Services Will Rule
Thank God! Pervasive computing is here.
9. The Brave New World
- Devices: increasingly more powerful, smaller, cheaper
- People interact daily with hundreds of computing devices (many of them mobile):
- Cars
- Desktops/Laptops
- Cell phones
- PDAs
- MP3 players
- Transportation passes
- ⇒ Computing is becoming pervasive
10. Securing Data Services
- Security is critical because in many pervasive applications, we interact with agents that are not in our home or office environment.
- Much of the work on security for distributed systems is not directly applicable to pervasive environments.
- Need to build analogs of the trust and reputation relationships in human societies.
- Need to worry about privacy!
11. Security Challenges
[Diagram: ABC Industries Inc., with offices in New York, LA, and Baltimore]
What if someone from the New York office visits the LA office? How are his rights for access to resources in the LA office decided?
A company-wide directory? It needs dynamic updating by sysadmins, violates the minimality principles of security, and is not scalable.
12. Security Challenges
[Diagram: a visitor from XYZ Inc., Seattle arrives at ABC Industries Inc., Baltimore. Rights?]
How does the ABC system decide what rights to give a consultant from XYZ Inc.?
How does the ABC system decide what rights to give a manager from XYZ Inc.?
13. Security Challenges
- Example 2, cont.:
- A company directory cannot be used
- Cross-organizational roles may be meaningless
- Issues specific to pervasive environments:
- Central access control is not scalable
- Foreign users or visitors: not possible to store their individual access rights
- The role of policies.
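The visitor problem above is commonly addressed by deriving rights from the credentials a visitor presents, instead of looking the visitor up in a central company-wide directory. The following Python sketch illustrates that idea under stated assumptions: the issuer names, attributes, and resources are all invented for illustration, and real systems would add signature verification and revocation.

```python
# Attribute-based access sketch: a visitor's rights come from
# credentials issued by an office the local policy trusts, not from
# a central directory. All names below are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Credential:
    issuer: str      # who vouches for the attribute, e.g. "ABC-NY"
    subject: str     # who the credential is about
    attribute: str   # e.g. "employee", "manager", "consultant"

@dataclass
class Policy:
    trusted_issuers: set = field(default_factory=set)
    # rules map a required attribute to the local resources it unlocks
    rules: dict = field(default_factory=dict)

    def permitted(self, creds, resource):
        for c in creds:
            if (c.issuer in self.trusted_issuers
                    and resource in self.rules.get(c.attribute, set())):
                return True
        return False

# The LA office trusts credentials issued by the NY office.
la_policy = Policy(
    trusted_issuers={"ABC-NY", "ABC-LA"},
    rules={"employee": {"printer", "wifi"},
           "manager": {"printer", "wifi", "conference-room"}},
)

visitor = [Credential("ABC-NY", "alice", "employee")]
print(la_policy.permitted(visitor, "printer"))          # True
print(la_policy.permitted(visitor, "conference-room"))  # False
```

Because the decision depends only on the presented credential and the local policy, no sysadmin has to pre-register the visitor, which addresses the scalability and minimality concerns raised on slide 11.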
14. An early policy for agents
- 1. A robot may not injure a human being, or, through inaction, allow a human being to come to harm.
- 2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
- 3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
- -- Handbook of Robotics, 56th Edition, 2058 A.D.
15. On policies, rules and laws
- The interesting thing about Asimov's laws is that robots did not always strictly follow them.
- This is a point of departure from more traditional hard-coded rules like DB access control and OS file permissions.
- For autonomous agents, we need policies that describe norms of behavior that they should follow to be good citizens.
- So, it's natural to worry about issues like:
- When an agent is governed by multiple policies, how does it resolve conflicts among them?
- How can we define penalties when agents don't fulfill their obligations?
- How can we relate notions of trust and reputation to policies?
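One common way to resolve conflicts among multiple policies, sketched below, is a precedence scheme. This is an assumption for illustration, not the mechanism the slides prescribe: here a higher-priority policy wins, and on a tie a prohibition overrides a permission.

```python
# Hypothetical conflict-resolution sketch: each policy statement is
# (priority, modality, action); higher priority wins, and on a tie
# "forbid" overrides "permit". The precedence rule is an assumption.

def decide(action, policies):
    """Return 'permit', 'forbid', or 'unregulated' for an action."""
    relevant = [(prio, mod) for prio, mod, act in policies if act == action]
    if not relevant:
        return "unregulated"
    top = max(prio for prio, _ in relevant)
    modes = {mod for prio, mod in relevant if prio == top}
    return "forbid" if "forbid" in modes else "permit"

company = (1, "permit", "share-location")   # company policy allows it
privacy = (2, "forbid", "share-location")   # user privacy policy forbids it
print(decide("share-location", [company, privacy]))  # forbid
print(decide("send-email", [company, privacy]))      # unregulated
```

An "unregulated" outcome is where norms (and the trust and penalty questions above) matter most: the agent must fall back on defaults rather than an explicit rule.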
16. The Role of Ontologies
- We will require shared ontologies to support this framework
- A common ontology to represent basic concepts: agents, actions, permissions, obligations, prohibitions, delegations, credentials, etc.
- Appropriate shared ontologies to describe classes, properties and roles of people and agents, e.g.:
- any device owned by TimFinin
- any request from a faculty member at ETZ
- Ontologies to encode policy rules
17. Ad-hoc networking technologies
- Ad-hoc networking technologies (e.g., Bluetooth)
- Main characteristics:
- Short range
- Spontaneous connectivity
- Free, at least for now
- Mobile devices:
- Aware of their neighborhood
- Can discover others in their vicinity
- Interact with peers in their neighborhood
- Inter-operate and cooperate as needed and as desired
- Both information consumers and providers
- ⇒ Ad-hoc mobile technology challenges the traditional client/server information access model
18. Pervasive environment paradigm
- Pervasive Computing Environment:
- Ad-hoc mobile connectivity
- Spontaneous interaction
- Peers: service/information consumers and providers; autonomous, adaptive, and proactive
- Data-intensive, deeply networked environment:
- Everyone can exchange information
- Data-centric model
- Some sources generate streams of data, e.g., sensors
- ⇒ Pervasive Computing Environments
19. Motivation: conference scenario
- Smart-room infrastructure and personal devices can assist an ongoing meeting: data exchange, schedulers, etc.
20. Imperfect world
- In a perfect world:
- everything is available and done automatically
- In the real world:
- Limited resources: battery, memory, computation, connection, bandwidth
- Must live with less-than-perfect results
- Dumb devices: must explicitly be told What, When, and How
- Foreign entities and unknown peers
- So, we really want:
- Smart, autonomous, dynamic, adaptive, and proactive methods to handle data and services
21. Securing Ad-Hoc Networks
- MANETs underlie much of pervasive computing
- They bring to the fore interesting problems related to open, dynamic, distributed systems
- Each node is an independent, autonomous router
- It has to interact with other nodes, some never seen before.
- How do you detect the bad guys?
22. Network Level: Good Neighbor
- Ad hoc network
- Node A sends a packet destined for E, through B.
- B and C make a snoop entry (A, E, Ck, B, D, E).
- B and C check for the snoop entry.
- Perform misroute
[Figure: ad hoc network with nodes A, B, C, D, E]
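The snoop-and-check idea on this slide can be sketched as follows. This is a simplified, hypothetical rendering of the mechanism: nodes that overhear a packet record the expected next hop, then verify that the forwarder really sent it to the right neighbor; the entry fields and timers of the actual scheme are abstracted away.

```python
# Sketch of snoop-based misroute detection: on route A-B-D-E, a
# neighbor (e.g. C) that overhears A's transmission records who should
# forward the packet next, then checks the overheard retransmission.
# Entry contents are simplified assumptions.

class SnoopTable:
    def __init__(self):
        self.entries = {}  # packet id -> expected forwarder

    def record(self, pkt_id, expected_forwarder):
        self.entries[pkt_id] = expected_forwarder

    def check(self, pkt_id, overheard_sender, overheard_next_hop, route):
        """True if the overheard retransmission matches the snooped
        route; False signals a possible misroute."""
        expected = self.entries.get(pkt_id)
        if expected is None or overheard_sender != expected:
            return True  # nothing to judge for this packet
        i = route.index(overheard_sender)
        return overheard_next_hop == route[i + 1]

route = ["A", "B", "D", "E"]
snoop = SnoopTable()
snoop.record("pkt1", "B")                    # C overhears A -> B
print(snoop.check("pkt1", "B", "D", route))  # True: B forwarded to D
print(snoop.check("pkt1", "B", "C", route))  # False: misroute detected
```

The per-packet check is purely local, which fits the slide's point that each node is an autonomous router with no central monitor to rely on.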
23. Good Neighbor
- No broadcast
- Hidden terminal
- Exposed terminal
- DSR vs. AODV
- GloMoSim
24. Intrusion Detection
- Behaviors:
- Selfish
- Malicious
- Detection vs. reaction:
- Shunning bad nodes
- Cluster voting
- Incentives (game-theoretic)
- Colluding nodes
- Forgiveness
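The cluster-voting reaction above can be sketched as a quorum decision. This is an illustrative assumption about the mechanism, not its actual specification: each cluster member votes on a suspect, and the suspect is shunned when a majority agrees, which also shows why colluding voters are a threat.

```python
# Hypothetical cluster-voting sketch: shun a suspect node when more
# than a quorum of cluster members vote it guilty. The threshold is
# an assumed parameter; colluding voters can defeat the vote only if
# they outnumber honest voters.

def cluster_vote(votes, quorum=0.5):
    """votes: dict of voter -> True (guilty) / False (innocent).
    Returns True when the suspect should be shunned."""
    if not votes:
        return False
    guilty = sum(1 for v in votes.values() if v)
    return guilty / len(votes) > quorum

votes = {"n1": True, "n2": True, "n3": False, "n4": True}
print(cluster_vote(votes))  # True -> shun the suspect
```

Forgiveness can then be layered on top, e.g. by clearing a shunned node's record after a timeout so a transiently faulty node can rejoin.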
25. Simulation in GloMoSim
- Passive intrusion detection:
- Individual determination
- No forwarding of results
- Active intrusion detection:
- Cluster scheme
- Voting
- Result flooding
26. GloMoSim Setup
- 16-node communication
- 4 nodes: sources for 2 CBR streams
- 2 nodes: paired CBR streams
- Mobility: 0-20 meters/sec
- Pause time: 0-15 s
- No bad nodes
27. Simulation Results
28. Preliminary Results
- Passive:
- False alarm rate > 50%
- Throughput rate decrease: < 3% additional
- Active:
- False alarm rate < 30%
- Throughput rate decrease: 25% additional
29. Challenges: is that all? (1)
- Spatio-temporal variation of data and data sources
- All devices in the neighborhood are potential information providers
- Nothing is fixed:
- No global catalog
- No global routing table
- No centralized control
- However, each entity can interact with its neighbors:
- By advertising / registering its service
- By collecting / registering services of others
30. Challenges: is that all? (2)
- A query may be explicit or implicit, but is often known up-front
- Users sometimes ask explicitly
- e.g., tell me the nearest restaurant that has vegetarian menu items
- The system can guess likely queries based on declarative information or past behavior
- e.g., the user always wants to know the price of IBM stock
31. Challenges: is that all? (3)
- Since information sources are not known a priori, schema translations cannot be done beforehand
- Resource-limited devices
- ⇒ so, hope for common, domain-specific ontologies?
- Different modes:
- A device could interact with only those providers whose schemas it understands
- A device could interact with anyone, and cache the information in hopes of a translation in the future
- A device could always try to translate itself
- Prior work in schema translation; ongoing work in ontology mapping
32. Challenges: is that all? (4)
- Cooperation amongst information sources cannot be guaranteed:
- A device has reliable information, but makes it inaccessible
- A device provides information which is unreliable
- Once a device shares information, it needs the capability to protect against future propagation of, and changes to, that information
33. Challenges: is that all? (5)
- Need to avoid humans in the loop
- Devices must dynamically "predict" data importance and utility based on the current context
- The key insight: declarative (or inferred) descriptions help:
- Information needs
- Information capability
- Constraints
- Resources
- Data
- Answer fidelity
- Expressive profiles can capture such descriptions
34. Our data management architecture
- MoGATU
- Design and implementation consists of:
- Data
- Metadata
- Profiles
- Entities
- Communication interfaces
- Information Providers
- Information Consumers
- Information Managers
35. MoGATU: metadata
- Metadata representation:
- To provide information about information providers and consumers, data objects, and queries and answers
- To describe relationships
- To describe restrictions
- To reason over the information
- ⇒ Semantic language:
- DAML+OIL / DAML-S
- http://mogatu.umbc.edu/ont/
36. MoGATU: profile
- Profile:
- User: preferences, schedule, requirements
- Device: constraints, providers, consumers
- Data: ownership, restrictions, requirements, process model
- Profiles based on BDI models:
- Beliefs are facts about the user or the environment/context
- Desires and intentions are higher-level expressions of beliefs and goals
- Devices reason over the BDI profiles:
- Generate domains of interest and utility functions
- Change domains and utility functions based on context
37. MoGATU: information manager (8)
- Problems:
- Not all sources and data are correct/accurate/reliable
- No common sense: a person can evaluate a web site based on how it looks, a computer cannot
- No centralized party that could verify peer reliability or the reliability of its data
- A device may be reliable, malicious, ignorant, or uncooperative
- Distributed belief:
- Need to depend on other peers
- Evaluate integrity of peers and data based on peer distributed belief
- Detect which peer and what data is accurate
- Detect malicious peers
- Incentive model: if A is malicious, it will be excluded from the network
38. MoGATU: information manager (9)
- Distributed Belief Model:
- A device sends a query to multiple peers
- It asks its vicinity for the reputation of untrusted peers that responded to the query
- It trusts a device only if it trusted it before, or if enough trusted peers trust it
- It uses the answers from (recommended-to-be) trusted peers to determine the final answer
- It updates the reputation/trust level for all devices that responded:
- The trust level increases for devices that responded in accordance with the final answer
- The trust level decreases for devices that responded in a conflicting way
- Each device builds a ring of trust
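The loop above can be condensed into a short sketch. This is an illustrative rendering of the slide's steps, not MoGATU's actual code: trust values, the vouching quorum, and the update step size are all assumed parameters.

```python
# Distributed-belief sketch: query peers, let trusted neighbors vouch
# for unknown responders, take the majority answer among trusted
# responders, then raise/lower trust for everyone. The threshold,
# quorum, and step size are illustrative assumptions.
from collections import defaultdict

class BeliefAgent:
    def __init__(self, trust=None, threshold=0.5, step=0.1):
        self.trust = defaultdict(float, trust or {})  # ring of trust
        self.threshold = threshold
        self.step = step

    def trusted(self, peer, recommenders=()):
        """Trust a peer directly, or if enough trusted peers vouch."""
        if self.trust[peer] >= self.threshold:
            return True
        vouchers = [r for r in recommenders if self.trust[r] >= self.threshold]
        return len(vouchers) >= 2  # "enough" is an assumed quorum

    def resolve(self, answers, recommendations):
        """answers: responder -> answer; recommendations: responder ->
        trusted peers vouching for it. Returns the final answer."""
        usable = {p: a for p, a in answers.items()
                  if self.trusted(p, recommendations.get(p, ()))}
        if not usable:
            return None
        counts = defaultdict(int)
        for a in usable.values():
            counts[a] += 1
        final = max(counts, key=counts.get)
        for p, a in answers.items():  # update the ring of trust
            self.trust[p] += self.step if a == final else -self.step
        return final

# Mirrors the "where is Bob?" example on the following slides:
agent = BeliefAgent(trust={"D": 0.9, "E": 0.8, "F": 0.7})
answers = {"B": "home", "C": "work", "D": "home"}
recs = {"B": ["D", "E", "F"], "C": []}  # D, E, F vouch for B, not C
print(agent.resolve(answers, recs))  # home
```

After the call, trust in B and D has increased and trust in C has decreased, exactly the bookkeeping the slide describes.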
39. A → D: Where is Bob?
A → C: Where is Bob?
A → B: Where is Bob?
40. C → A: Bob is at work.
B → A: Bob is home.
D → A: Bob is home.
41. A: B says Bob is at home, C says Bob is at work, D says Bob is at home.
A: I have enough trust in D. What about B and C?
42. A: Do you trust C?
B: I am not sure.
C: I always do.
D: I don't.
E: I don't.
F: I do.
A: I don't care what C says. I don't know enough about B, but I trust D, E, and F. Together, they don't trust C, so neither will I.
43. A: Do you trust B?
B: I do.
C: I never do.
D: I am not sure.
E: I do.
F: I am not sure.
A: I don't care what B says. I don't trust C, but I trust D, E, and F. Together, they trust B a little, so will I.
44. A: I trust B and D, and both say Bob is home.
A: Increase trust in D.
A: Decrease trust in C.
A: Increase trust in B.
A: Bob is home!
45. MoGATU: information manager (10)
- Distributed Belief Model:
- Initial trust function: positive, negative, undecided
- Trust learning function: Blindly+, Blindly-, F+/S-, S+/F-, F+/F-, S+/S-, Exp
- Trust weighting function: multiplication, cosine
- Accuracy merging function: max, min, average
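The last two function families can be sketched concretely. The slide only names the families (multiplication/cosine for weighting a recommender's opinion by trust, and max/min/average for merging weighted opinions); the exact formulas below are assumptions chosen for illustration.

```python
# Hedged sketch of trust weighting and accuracy merging. The slide
# names the function families; these particular formulas are assumed.
import math

def weight_multiply(trust, accuracy):
    # weight an opinion's reported accuracy by trust in its source
    return trust * accuracy

def weight_cosine(trust, accuracy):
    # treat (trust, accuracy) as a 2-D vector and compare it with the
    # ideal opinion (1, 1) via cosine similarity
    norm = math.hypot(trust, accuracy)
    return (trust + accuracy) / (norm * math.sqrt(2)) if norm else 0.0

def merge(opinions, how="avg"):
    """opinions: list of (trust, accuracy) pairs; fuse them into one
    accuracy estimate using multiplication weighting."""
    weighted = [weight_multiply(t, a) for t, a in opinions]
    if how == "max":
        return max(weighted)
    if how == "min":
        return min(weighted)
    return sum(weighted) / len(weighted)

opinions = [(0.9, 0.8), (0.5, 0.6), (0.2, 0.9)]
print(round(merge(opinions, "avg"), 3))  # 0.4
```

Max merging is optimistic (one trusted endorsement suffices), min is pessimistic, and average smooths over outliers; the experiments on the later slides compare exactly these trade-offs.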
46. Experiments
- Primary goal of distributed belief:
- Improve query processing accuracy by using trusted sources and trusted data
- Problems:
- Not all sources and data are correct/accurate/reliable
- No centralized party that could verify peer reliability or the reliability of its data; need to depend on other peers
- No common sense: a person can evaluate a web site based on how it looks, a computer cannot
- Solution:
- Evaluate integrity of peers and data based on peer distributed belief
- Detect which peer and what data is accurate
- Detect malicious peers
- Incentive model: if A is malicious, it will be excluded from the network
47. Experiments
- Devices:
- Reliable (share reliable data only)
- Malicious (try to share unreliable data as reliable)
- Ignorant (have unreliable data but believe it is reliable)
- Uncooperative (have reliable data, will not share)
- Model:
- A device sends a query to multiple peers
- It asks its vicinity for the reputation of untrusted peers that responded to the query
- It trusts a device only if it trusted it before, or if enough trusted peers trust it
- It uses the answers from (recommended-to-be) trusted peers to determine the final answer
- It updates the reputation/trust level for all devices that responded:
- The trust level increases for devices that responded in accordance with the final answer
- The trust level decreases for devices that responded in a conflicting way
48. Experimental environment
- How:
- MoGATU and GloMoSim
- Spatio-temporal environment:
- 150 x 150 m² field
- 50 nodes
- Random way-point mobility
- AODV
- Cache holds 50% of global knowledge
- Trust-based LRU
- 50 minutes per simulation run
- 800 question tuples:
- Each device: 100 random unique questions
- Each device: 100 random unique answers not matching its questions
- Each device initially trusts 3-5 other devices
49. Experimental environment (2)
- Level of dishonesty: 0-100%
- A dishonest device never provides an honest answer
- An honest device makes a best effort
- Initial trust function: positive, negative, undecided
- Trust learning function: Blindly+, Blindly-, F+/S-, S+/F-, F+/F-, S+/S-, Exp
- Trust weighting function: multiplication, cosine
- Accuracy merging function: max, min, avg
- Trust and distrust convergence: how soon are dishonest devices detected?
50. Results
- Answer accuracy vs. trust learning functions
- Answer accuracy vs. accuracy merging functions
- Distrust convergence vs. dishonesty level
51. Answer Accuracy vs. Trust Learning Functions
- The effects of trust learning functions with an initial optimistic trust, for environments with varying levels of dishonesty.
- The results are shown for the ?, ?--, ?s, ?f, ?f, ?f-, and ?exp learning functions.
52. Answer Accuracy vs. Trust Learning Functions (2)
- The effects of trust learning functions with an initial pessimistic trust, for environments with varying levels of dishonesty.
- The results are shown for the ?, ?--, ?s, ?f, ?f, ?f-, and ?exp learning functions.
53. Answer Accuracy vs. Accuracy Merging Functions
- The effects of accuracy merging functions for environments with varying levels of dishonesty. The results are shown for:
- (a) MIN using the only-one (OO) final answer approach
- (b) MIN using the highest-one (HO) final answer approach
- (c) MAX OO, (d) MAX HO, (e) AVG OO, and (f) AVG HO
54. Distrust Convergence vs. Dishonesty Level
- Average distrust convergence period in seconds for environments with varying levels of dishonesty.
- The results are shown for the ?, ?--, ?s, and ?f trust learning functions with an initial optimistic trust strategy, and for the same functions using an undecided initial trust strategy for results (e-h), respectively.
55. http://ebiquity.umbc.edu/